The risk and reward of ChatGPT in cybersecurity

#artificialintelligence

Unless you've been on a retreat in some far-flung location with no internet access for the past few months, chances are you're well aware of how much hype and fear there's been around ChatGPT, the artificial intelligence (AI) chatbot developed by OpenAI. Maybe you've seen articles about academics and teachers worrying that it'll make cheating easier than ever. On the other side of the coin, you might have seen the articles evangelising all of ChatGPT's potential applications. Alternatively, you may have been tickled by some of the more esoteric examples of people using the tool. One user, for example, got it to write an instruction guide for removing peanut butter sandwiches from a VCR in the style of the King James Bible.


Breaking the Deepfake Spell: The Magic of Blockchain, NFTs, and Machine Learning

#artificialintelligence

The rise of deepfakes, digitally manipulated content created using artificial intelligence, poses a significant threat to the authenticity of digital media. To combat this issue, blockchain, NFTs, and machine learning are potentially the main pillars for authenticating and verifying digital content. By combining these technologies, a more robust, sustainable, and decentralized approach to countering deepfakes can be created. Blockchain technology provides a secure and tamper-proof way to verify the authenticity of digital media: by recording a fingerprint of the original content on a distributed ledger, any later alteration or manipulation of that content becomes detectable. This technique can be applied to both image and video content, securing every step of the creation, distribution, and storage process.
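As a minimal sketch of the verification idea described above (not taken from the article): a cryptographic hash of the original media is recorded at publication time, and any copy can later be checked against it. The ledger here is simulated with a plain dictionary; a real system would anchor these hashes on an actual blockchain.

```python
import hashlib

# Simulated append-only ledger: content ID -> hash recorded at publication.
# A real deployment would write these entries to a public blockchain.
ledger = {}

def register(content_id: str, media_bytes: bytes) -> str:
    """Record the SHA-256 fingerprint of the original media on the ledger."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    ledger[content_id] = digest
    return digest

def verify(content_id: str, media_bytes: bytes) -> bool:
    """Check a copy against the fingerprint recorded at publication."""
    return ledger.get(content_id) == hashlib.sha256(media_bytes).hexdigest()

original = b"frame data of the original video"
register("video-001", original)

print(verify("video-001", original))                 # True: untouched copy
print(verify("video-001", b"deepfaked frame data"))  # False: content was altered
```

Note that the scheme makes tampering detectable rather than impossible: a manipulated copy simply fails to match the fingerprint recorded for the original.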


Russian Hackers Try to Bypass ChatGPT's Restrictions For Malicious Purposes - Infosecurity Magazine

#artificialintelligence

Russian cyber-criminals have been observed on dark web forums trying to bypass OpenAI's API restrictions to gain access to the ChatGPT chatbot for nefarious purposes. Various individuals have been seen discussing, for instance, how to use stolen payment cards to pay for upgraded OpenAI accounts (thus circumventing the limitations of free accounts). Others have created blog posts on how to bypass OpenAI's geo-controls, and others still have written tutorials explaining how to use semi-legal online SMS services to register for ChatGPT. "Generally, there are a lot of tutorials in Russian on semi-legal online SMS services and how to use them to register for ChatGPT, and we have examples that it is already being used," wrote Check Point Research (CPR), which shared the findings with Infosecurity ahead of publication. "It is not extremely difficult to bypass OpenAI's restricting measures for specific countries to access ChatGPT," said Sergey Shykevich, threat intelligence group manager at Check Point Software Technologies.


Hackers Exploiting ChatGPT To Write Malicious Codes To Steal Your Data

#artificialintelligence

New Delhi, Jan 8 (IANS) Artificial intelligence (AI)-driven ChatGPT, which gives human-like answers to questions, is also being used by cyber-criminals to develop malicious tools that can steal your data, a report has warned. The first instances of cybercriminals using ChatGPT to write malicious code have been spotted by Check Point Research (CPR) researchers. In underground hacking forums, threat actors are creating "infostealers" and encryption tools and facilitating fraud activity. The researchers warned of fast-growing interest in ChatGPT among cybercriminals seeking to scale and teach malicious activity: in recent weeks they have seen evidence of hackers starting to use it to write malicious code.


Deepfake Fiascos Of 2020 That Made Headlines

#artificialintelligence

Deepfakes are indeed scary and have managed to strike a nerve with many, especially those victimised by this sophisticated technology. The technology has become a worldwide concern not only for its influence on election campaigns but also for the criminal activity associated with it. Easily accessible deepfake-making tools, together with advances in GANs, have made it relatively easy for notorious minds to create these eerie-looking, unreal AI-generated videos and images. Such improvement and accessibility have in turn increased the number of deepfake incidents in recent times. Some are so incredibly convincing that they can pass for the original videos. This news showcased one of the weirder applications of deepfakes, which used artificial intelligence to manipulate audio-visual content: a less-heard-of usage termed the audio deepfake scam.


The cybersecurity battle of the future – AI vs. AI

#artificialintelligence

Artificial intelligence and machine learning continue to gain a foothold in our everyday lives. Whether for complex tasks like computer vision and natural language processing, or for something as basic as an online chatbot, their popularity shows no signs of slowing. Companies have also started to explore deep learning, an advanced subset of machine learning. By applying "deep neural networks", deep learning takes inspiration from how the human brain works. Unlike classical machine learning, deep learning can train directly on raw data, requiring little to no human intervention.
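As an illustrative sketch (not from the article) of the last point, the toy network below learns the XOR function directly from its raw 0/1 inputs, with no hand-engineered features. It is a minimal two-layer NumPy example, assuming only that NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)

# Raw inputs and XOR targets: no feature engineering, just the data itself.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 8 hidden tanh units -> 1 sigmoid output.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass on the raw inputs.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of binary cross-entropy w.r.t. each parameter.
    dp = (p - y) / len(X)
    dW2 = h.T @ dp; db2 = dp.sum(axis=0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    # Gradient-descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Recompute predictions with the final weights.
h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2)
preds = (p > 0.5).astype(int).ravel()
print(preds)
```

With this seed and enough iterations the network recovers the XOR pattern [0, 1, 1, 0], despite never being given anything beyond the raw input bits, which is the "little to no human intervention" property the snippet describes.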


What Is Holding Back Machine Learning in Healthcare - Amit Ray

#artificialintelligence

What is holding back the large-scale implementation of machine learning systems in healthcare and precision medicine? In this article, Dr. Amit Ray explains the key obstacles and challenges of implementing large-scale machine learning systems in healthcare. Dr. Ray argues that a lack of deeper integration, and an incomplete understanding of the underlying molecular processes of the diseases being treated, may limit progress towards large-scale, reliable machine learning systems in healthcare. Here, nine obstacles facing present-day machine learning systems in healthcare are discussed. Recently, machine learning algorithms, especially deep learning, have shown impressive performance in many areas of medical science, particularly in classifying imaging data across clinical domains. In academic settings, deep learning and reinforcement learning methods of artificial intelligence (AI) have shown tremendous success in numerous clinical areas, such as omics data integration (genomics, proteomics or metabolomics), prediction of drug-disease correlations based on gene expression, and finding combinations of drugs that should not be taken together.


How AI can be used for Malicious Purposes - Deep Instinct

#artificialintelligence

In recent years, deep learning and machine learning have gained traction in many areas that have a direct positive effect on our lives, as well as in complex tasks such as computer vision (image recognition), machine translation, and natural language processing. Like so many other technologies that are changing our lives for good, AI also has destructive potential, and there is no reason why it won't be used for malicious activities as well. Up until now, we haven't seen AI used for malicious activities in cybersecurity, due to the high costs, the lack of skills, and the limited tools available. But just like with any other technology, it's only a matter of time before it happens in cybersecurity. Think about what would happen if attackers started using the power of deep learning and machine learning to their advantage.


Artificial Intelligence and Cybersecurity: Attacking and Defending

#artificialintelligence

Cybersecurity is a manpower-constrained market – therefore, the opportunities for artificial intelligence (AI) automation are vast. Frequently, AI is used to make certain defensive aspects of cybersecurity more wide-reaching and effective; combating spam and detecting malware are prime examples. On the opposite side, there are many incentives to use AI when attempting to attack vulnerable systems belonging to others. These incentives include the speed of attack, low costs, and the difficulty of attracting skilled staff in an already constrained environment.

